Exploring a unified low rank representation for multi-focus image fusion
Authors
Abstract
Recent years have witnessed a trend that uses image representation models, including sparse representation (SR), low-rank representation (LRR) and their variants, for multi-focus image fusion. Despite the thrilling preliminary results, existing methods conduct fusion patch by patch, leading to insufficient consideration of spatial consistency among patches within a local region or an object. As a result, not only are artifacts easily introduced into the fused image, but "jagged" edges also frequently arise on the boundaries between focused and de-focused regions, which is an inherent problem of these patch-based methods. Aiming to address the above problems, this paper proposes a new fusion method integrating super-pixel clustering with a unified LRR (ULRR) model. The entire algorithm is carried out in three steps. In the first step, each source image is segmented into a small number of super-pixels of irregular sizes, rather than regular patches, to diminish artifacts while preserving the objects in the image. Secondly, a clustering-based fusion strategy is applied to the segmented images. This is achieved using the proposed ULRR model, which imposes a low-rank constraint onto each cluster; this is apparently more reasonable for images of complicated scenes. Moreover, a Laplacian regularization term is incorporated into the model to enforce spatial consistency within the same cluster. Finally, a focus measure is defined to identify the focused regions in the source images by jointly using the coefficients and errors derived from the ULRR model. Extensive experiments have been conducted, and the results demonstrate the superiority of the proposed method in diminishing artifacts on the boundaries between focused and de-focused regions, compared with state-of-the-art algorithms.
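For readers who want the shape of the model, a generic LRR objective with a Laplacian regularizer, matching the ingredients named in the abstract, is sketched below. This is a plausible reconstruction, not the paper's exact ULRR formulation: X stacks the features of the super-pixels in one cluster, Z holds the low-rank coefficients, E the errors, L is a graph Laplacian over the super-pixels, and λ, β are trade-off weights.

```latex
\min_{Z,\,E}\ \|Z\|_{*} + \lambda \|E\|_{2,1} + \beta\,\mathrm{tr}\!\left(Z L Z^{\top}\right)
\quad \text{s.t.} \quad X = XZ + E
```

A minimal end-to-end sketch of the three steps with standard tooling (scikit-image SLIC for super-pixels, k-means for clustering); the focus decision below uses the mean absolute Laplacian as a stand-in for the paper's coefficient-and-error measure, and all parameter values are illustrative:

```python
import numpy as np
from skimage.segmentation import slic
from skimage.filters import laplace
from sklearn.cluster import KMeans

def fuse_multifocus(img_a, img_b, n_segments=300, n_clusters=8):
    """Illustrative pipeline: (1) super-pixel segmentation,
    (2) clustering of super-pixels, (3) per-super-pixel focus
    decision. Grayscale float images of equal shape are assumed."""
    labels = slic(img_a, n_segments=n_segments, channel_axis=None)  # step 1
    ids = np.unique(labels)

    # Step 2: cluster super-pixels by a simple mean-intensity feature;
    # the ULRR model would be solved per cluster at this point.
    feats = np.array([[img_a[labels == i].mean()] for i in ids])
    _ = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    # Step 3: copy each super-pixel from the sharper source.
    sharp_a, sharp_b = np.abs(laplace(img_a)), np.abs(laplace(img_b))
    fused = img_b.copy()
    for i in ids:
        m = labels == i
        if sharp_a[m].mean() >= sharp_b[m].mean():
            fused[m] = img_a[m]
    return fused
```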
Similar resources
Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning
Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in different domains. In real-world applications there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective for image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...
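The snippet is truncated, but the computational core shared by low-rank representation learning methods is the nuclear-norm proximal step, i.e. singular value thresholding. A minimal sketch, assuming the usual ADMM-style solver structure:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, prox_{tau*||.||_*}(M). This inner step enforces
    low rank in most LRR-style solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```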
Multi Focus Image Fusion Using Joint Sparse Representation
The main objective of image fusion is to produce a single, highly informative image. In this paper, a method for image fusion using joint sparse representation is presented, with the sparse coefficients serving as image features. Each source image can be described by features common to all sources and features innovative to that source. The use of sparse representation for the extraction of common and innovative features, ...
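A minimal sketch of the common-plus-innovative decomposition behind joint sparse representation, using a joint dictionary and orthogonal matching pursuit; the separable cosine dictionary and the max-activity selection of the innovative part are illustrative choices, not the paper's exact design:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dct_dictionary(p=8, k=16):
    """Overcomplete separable cosine dictionary for p x p patches."""
    d = np.zeros((p, k))
    for j in range(k):
        v = np.cos(np.arange(p) * j * np.pi / k)
        if j > 0:
            v -= v.mean()
        d[:, j] = v / np.linalg.norm(v)
    return np.kron(d, d)                      # (p*p, k*k)

def jsr_fuse_patch(p1, p2, D, n_nonzero=24):
    """Joint sparsity model: [x1; x2] = [D D 0; D 0 D] [zc; z1; z2],
    where zc encodes common features and z1, z2 the innovative ones.
    The fused patch keeps the common part plus the stronger innovation."""
    Z = np.zeros_like(D)
    D_joint = np.block([[D, D, Z], [D, Z, D]])
    y = np.concatenate([p1.ravel(), p2.ravel()])
    z = orthogonal_mp(D_joint, y, n_nonzero_coefs=n_nonzero)
    zc, z1, z2 = np.split(z, 3)
    zi = z1 if np.abs(z1).sum() >= np.abs(z2).sum() else z2
    return (D @ (zc + zi)).reshape(p1.shape)
```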
Pattern Selective Image Fusion for Multi-focus Image Reconstruction
This paper presents a method for removing focal blur in images using a fusion algorithm. Images with different focal settings can be combined into a single fused image containing all of the in-focus information. Pattern-selective image fusion provides a mechanism for combining multiple images by identifying salient features in the source images and combining those features into a single fused image. However, one of ...
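A minimal sketch of the pattern-selective idea: measure local salience with a band-pass response and copy each pixel from the source whose pattern is stronger. The Gaussian band-pass and the smoothed decision mask are illustrative choices, not the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pattern_select_fuse(a, b, sigma=2.0):
    """Salience = |image - its Gaussian blur| (a cheap band-pass
    response); the decision mask is smoothed so neighboring pixels
    tend to come from the same source."""
    sal_a = np.abs(a - gaussian_filter(a, sigma))
    sal_b = np.abs(b - gaussian_filter(b, sigma))
    mask = gaussian_filter((sal_a >= sal_b).astype(float), sigma) > 0.5
    return np.where(mask, a, b)
```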
Efficient Compressive Multi-Focus Image Fusion
Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel-coefficient fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, traditional clarity measures are not designed for compressive imaging measurements, which are maps of the source scene obtained with random or random-like measureme...
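For reference, a conventional spatial-domain clarity measure of the kind the snippet says does not transfer to compressive measurements; the local variance of the Laplacian is one common choice (an illustrative pick, not this paper's):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def local_clarity(img, size=9):
    """Per-pixel clarity: local variance of the Laplacian response.
    High values indicate in-focus (sharp) regions."""
    lap = laplace(img.astype(float))
    mean = uniform_filter(lap, size)
    return uniform_filter(lap * lap, size) - mean * mean
```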
Multi-focus image fusion using clarity map
A novel method is proposed in this paper to compute the clarity map of a single image from the local maxima and minima inside the image. Well-focused regions can be easily identified from the significant pixels in the clarity map. Our method uses the difference between the clarity maps of the two source images as the focus criterion to implement multi-focus image fusi...
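A minimal sketch of a clarity map built from local maxima and minima (here, the local max-min range via morphological filters) and a fusion rule that compares the two maps; the window size and hard per-pixel selection are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def clarity_map(img, size=7):
    """Local max-min range: sharp regions show large contrast between
    nearby maxima and minima, blurred regions do not."""
    return maximum_filter(img, size) - minimum_filter(img, size)

def fuse_by_clarity(a, b, size=7):
    """Pick each pixel from the source with the larger clarity value."""
    return np.where(clarity_map(a, size) >= clarity_map(b, size), a, b)
```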
Journal
Journal title: Pattern Recognition
Year: 2021
ISSN: 1873-5142, 0031-3203
DOI: https://doi.org/10.1016/j.patcog.2020.107752